Ab Initio, Big Data, Informatica, Tableau, Data Architect, Cognos, MicroStrategy, Healthcare Business Analysts, Cloud, etc.
at Exusia

Secondary Skills: Streaming, Archiving, AWS / Azure / Cloud
Role:
· Should have strong programming and support experience in Java and J2EE technologies
· Should have good experience in Core Java, JSP, Servlets, and JDBC
· Good exposure to Hadoop development (HDFS, MapReduce, Hive, HBase, Spark)
· Should have 2+ years of Java experience and 1+ years of experience in Hadoop
· Should possess good communication skills
Responsibilities:
• Build customer-facing solutions for the Data Observability product to monitor data pipelines.
• Work on POCs to build new data pipeline monitoring capabilities.
• Build next-generation scalable, reliable, flexible, high-performance data pipeline capabilities for ingesting data from multiple sources containing complex datasets.
• Continuously improve the services you own, making them more performant and utilising resources in the most optimised way.
• Collaborate closely with the engineering, data science, and product teams to propose an optimal solution for a given problem statement.
• Work closely with the DevOps team on performance monitoring and MLOps.
Required Skills:
• 3+ years of experience with data-related technologies
• Good understanding of distributed computing principles
• Experience in Apache Spark
• Hands-on programming with Python
• Knowledge of Hadoop v2, MapReduce, HDFS
• Experience building stream-processing systems using technologies such as Apache Storm, Spark Streaming, or Flink
• Experience with messaging systems, such as Kafka or RabbitMQ
• Good understanding of Big Data querying tools, such as Hive
• Experience with integration of data from multiple data sources
• Good understanding of SQL queries, joins, stored procedures, relational schemas
• Experience with NoSQL databases, such as HBase, Cassandra/Scylla, MongoDB
• Knowledge of ETL techniques and frameworks
• Performance tuning of Spark Jobs
• A general understanding of Data Quality is a plus
• Experience with Databricks, Snowflake, BigQuery, or similar lakehouse platforms would be a big plus
• Some knowledge of DevOps is nice to have
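As a rough illustration of the SQL expectations above (queries, joins, relational schemas), a candidate should be comfortable writing queries like the following sketch, shown here against Python's built-in sqlite3 with a hypothetical `customers`/`orders` schema invented for this example:

```python
import sqlite3

# In-memory database with a hypothetical two-table relational schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY,
                         customer_id INTEGER REFERENCES customers(id),
                         amount REAL);
    INSERT INTO customers VALUES (1, 'Acme'), (2, 'Globex');
    INSERT INTO orders VALUES (1, 1, 250.0), (2, 1, 100.0), (3, 2, 75.0);
""")

# Join plus aggregation: total order amount per customer.
rows = conn.execute("""
    SELECT c.name, SUM(o.amount) AS total
    FROM customers c
    JOIN orders o ON o.customer_id = c.id
    GROUP BY c.name
    ORDER BY total DESC
""").fetchall()

print(rows)  # [('Acme', 350.0), ('Globex', 75.0)]
```

The same join-then-aggregate pattern carries over directly to Hive and Spark SQL mentioned in the listing.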
Must have experience in the BFSI domain.
Experience - Min. 3 yrs
Location - Pune
Mandatory Skills - Experience in Power BI/Tableau, SQL, basic Python
Company Description
At Bungee Tech, we help retailers and brands meet customers everywhere and on every occasion they are in. We believe that accurate, high-quality data matched with compelling market insights empowers retailers and brands to keep their customers at the center of all the innovation and value they deliver.
We provide retailers and brands with a clear and complete omnichannel picture of their competitive landscape. We collect billions of data points, multiple times a day, from publicly available sources. Using high-quality extraction, we uncover detailed information on products and services, which we automatically match and then proactively track for price, promotion, and availability. Plus, anything we do not match helps to identify a new assortment opportunity.
Empowered with this unrivalled intelligence, we unlock compelling analytics and insights that, once blended with verified partner data from trusted sources such as Nielsen, paint a complete, consolidated picture of the competitive landscape.
We are looking for a Big Data Engineer who will work on collecting, storing, processing, and analyzing huge data sets. The primary focus will be on choosing optimal solutions for these purposes, then implementing, maintaining, and monitoring them.
You will also be responsible for integrating them with the architecture used in the company.
We're working on the future. If you are seeking an environment where you can drive innovation, want to apply state-of-the-art software technologies to solve real-world problems, and want the satisfaction of providing visible benefit to end users in an iterative, fast-paced environment, this is your opportunity.
Responsibilities
As an experienced member of the team, in this role, you will:
- Contribute to evolving the technical direction of analytical systems and play a critical role in their design and development.
- Research, design, code, troubleshoot, and support. What you create is also what you own.
- Develop the next generation of automation tools for monitoring and measuring data quality, with associated user interfaces.
- Be able to broaden your technical skills and work in an environment that thrives on creativity, efficient execution, and product innovation.
BASIC QUALIFICATIONS
- Bachelor’s degree or higher in an analytical area such as Computer Science, Physics, Mathematics, Statistics, Engineering or similar.
- 5+ years of relevant professional experience in Data Engineering and Business Intelligence
- 5+ years with advanced SQL (analytical functions), ETL, and Data Warehousing
- Strong knowledge of data warehousing concepts, including data warehouse technical architectures, infrastructure components, ETL/ELT and reporting/analytic tools and environments, data structures, data modeling, and performance tuning.
- Ability to effectively communicate with both business and technical teams.
- Excellent coding skills in Java, Python, C++, or equivalent object-oriented programming language
- Understanding of relational and non-relational databases and basic SQL
- Proficiency with at least one of these scripting languages: Perl / Python / Ruby / shell script
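The "advanced SQL (analytical functions)" requirement above refers to window functions such as running aggregates or ranking. As a minimal sketch, here is a running total per group using Python's built-in sqlite3 (SQLite >= 3.25 for window-function support) and an invented `sales` table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, day INTEGER, amount REAL);
    INSERT INTO sales VALUES
        ('east', 1, 10.0), ('east', 2, 20.0),
        ('west', 1, 5.0),  ('west', 2, 30.0);
""")

# Analytical (window) function: running total per region, ordered by day.
rows = conn.execute("""
    SELECT region, day,
           SUM(amount) OVER (PARTITION BY region ORDER BY day) AS running_total
    FROM sales
    ORDER BY region, day
""").fetchall()

print(rows)
# [('east', 1, 10.0), ('east', 2, 30.0), ('west', 1, 5.0), ('west', 2, 35.0)]
```

The same `OVER (PARTITION BY ... ORDER BY ...)` syntax works in the warehouse engines named elsewhere in this document (Redshift, BigQuery, Snowflake, Spark SQL).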
PREFERRED QUALIFICATIONS
- Experience with building data pipelines from application databases.
- Experience with AWS services - S3, Redshift, Spectrum, EMR, Glue, Athena, ELK etc.
- Experience working with Data Lakes.
- Experience providing technical leadership and mentoring other engineers on best practices in the data engineering space
- Sharp problem-solving skills and the ability to resolve ambiguous requirements
- Experience working with Big Data
- Knowledge of and experience working with Hive and the Hadoop ecosystem
- Knowledge of Spark
- Experience working with Data Science teams
- Responsible for gathering, collecting, and crunching raw data.
- Own all data needs in the growth team and across key cross-functional initiatives, from data extraction to dashboard creation and data analysis.
- Provide insights and recommended actions in response to current situations and trends, as well as preventive recommendations for better preparation.
- Collaborate with data engineering and cross-functional stakeholders to define data requirements and create dashboards that drive business decisions and optimize business outcomes.
- Manage all regular reporting and tracking.
- Deliver analysis, insights, reporting, data marts, and tools to support the business team.
- Build and maintain data marts and dashboards for tracking business OKRs and initiatives.
- Ingest both internal and external data to support business needs.
Qualification:
- Bachelor's degree in Engineering, Mathematics, Statistics, Operation Research or other related disciplines.
- Having 2-5 years of experience is an advantage, but fresh graduates are welcome to apply as well.
- Expert in spreadsheets and SQL, with strong experience in Data Visualization and Reporting tools (e.g. Tableau, Google Data Studio)
- Comfortable working independently and collaboratively with minimal guidance.
• Drive the data engineering implementation
• Strong experience in building data pipelines
• AWS stack experience is a must
• Deliver conceptual, logical, and physical data models for the implementation teams
• Strong SQL skills are a must: advanced SQL working knowledge and experience working with a variety of relational databases, SQL query authoring
• AWS cloud data pipeline experience is a must: data pipelines and data-centric applications using distributed storage platforms like S3 and distributed processing platforms like Spark, Airflow, Kafka
• Working knowledge of AWS technologies such as S3, EC2, EMR, RDS, Lambda, Elasticsearch
• Ability to use a major programming language (e.g. Python/Java) to process data for modelling
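As a hedged sketch of what the "data pipelines" in this listing reduce to at their smallest, here is a toy extract-transform-load step in plain Python (stdlib only; at scale, S3 would replace the in-memory source and Spark/Airflow would replace the transform and orchestration). All names below are invented for illustration:

```python
import csv
import io
import sqlite3

# Extract: a small CSV feed (stands in for an S3 object or upstream source).
raw = io.StringIO("user,clicks\nalice,3\nbob,7\nalice,2\n")
records = list(csv.DictReader(raw))

# Transform: cast types and aggregate clicks per user.
totals = {}
for rec in records:
    totals[rec["user"]] = totals.get(rec["user"], 0) + int(rec["clicks"])

# Load: write the aggregated result into a warehouse-style table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE user_clicks (user TEXT PRIMARY KEY, clicks INTEGER)")
conn.executemany("INSERT INTO user_clicks VALUES (?, ?)", sorted(totals.items()))

rows = conn.execute("SELECT user, clicks FROM user_clicks ORDER BY user").fetchall()
print(rows)  # [('alice', 5), ('bob', 7)]
```

The extract/transform/load boundaries shown here are exactly the seams where the listed tools plug in: distributed storage on the extract side, distributed processing in the middle, and a relational or lakehouse target on the load side.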